Transmedial Documentation for Non-Visual Image Access
In my doctoral studies on information accessibility for individuals who are blind or visually impaired, I have been exploring ways to make image documents more accessible. This requires using an alternative sensory modality and translating the document into a different format. The questions that arise when we consider this process are many, but among them are: Is it the same document once we have converted it to an audio narrative about the work, a 3D topographic map of an artwork, or a musical interpretation? If it is not the same document, how faithful can the "transmedial" translation be to the original work? Are such efforts valid and useful?
I hope to work with users who have low vision to determine if these image re-documentations are indeed useful and what means of representation are preferred. We now convert textbooks to audio books or electronic texts readable by special equipment, but how do we treat the images in these documents? The images are part of a whole (the textbook), but are also documents in and of themselves. They may have a history apart from the work within which they’re found. They may be reproduced with permission from copyright holders. What is the best practice for describing an image when reading a text to someone who cannot see?
These issues of documentation are part of the exploration now under way. I will present several examples of approaches to addressing the problem as provocation for discussion.
Extraction and parsing of herbarium specimen data: Exploring the use of the Dublin Core application profile framework
Herbaria around the world house millions of plant specimens; botanists and other researchers value these resources as ingredients in biodiversity research. Even when the specimen sheets are digitized and made available online, the critical information about the specimen stored on the sheet is not in a usable (i.e., machine-processible) form. This paper describes a current research and development project that is designing and testing high-throughput workflows combining machine and human processes to extract and parse the specimen label data. The primary focus of the paper is the metadata needs of the workflow and the creation of structured metadata records describing the plant specimens. In the project, we are exploring the use of the new Dublin Core Metadata Initiative framework for application profiles. First articulated as the Singapore Framework for Dublin Core Application Profiles in 2007, this framework is still in the infancy of its use. Its promise of maximum interoperability, of documenting metadata use for maximum reusability, and of supporting metadata applications that conform to Web architectural principles provides the incentive to explore the framework and to contribute implementation experience with it.
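The label-to-record step described above can be sketched in a few lines. This is a minimal illustration, not the project's actual workflow: the label text, the regular expressions, and the choice of fields are assumptions, though the property names (dcterms:title, dcterms:creator, etc.) are real DCMI terms.

```python
import re

def parse_label(label: str) -> dict:
    """Parse a hypothetical semi-structured herbarium label into a
    flat Dublin Core-style metadata record. The field layout and
    regexes are illustrative assumptions, not the project's profile."""
    patterns = {
        "dcterms:title":   r"Species:\s*(.+)",
        "dcterms:creator": r"Collector:\s*(.+)",
        "dcterms:created": r"Date:\s*([\d-]+)",
        "dcterms:spatial": r"Locality:\s*(.+)",
    }
    record = {}
    for term, pattern in patterns.items():
        match = re.search(pattern, label)
        if match:
            record[term] = match.group(1).strip()
    return record

# A made-up specimen label for illustration.
label = """Species: Acer rubrum L.
Collector: J. Smith
Date: 1987-06-14
Locality: Dane County, Wisconsin"""

record = parse_label(label)
```

In a real pipeline the regex step would be one candidate among machine and human processes, with the application profile constraining which properties the resulting record may carry.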
Outside the Frame: Modeling Discontinuities in Video Stimulus Streams
How are we to get beyond the literary metaphor that Augst asserts is the central problem with film analysis? How are we to step outside the "shot" as the unit of analysis, the "shot" that Bonitzer claims is useless for analysis because of researchers' "endlessly bifurcated" definitions of the term?
We have had success with a form of computational structural analysis that incorporates the viewer into the model: comparing changes in levels of red, green, and blue from frame to frame, and comparing the patterns of change with an expert film theorist's model.
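The frame-to-frame comparison can be sketched simply. This is an assumption-laden illustration, not the authors' implementation: each frame is reduced to its mean (R, G, B) levels, and the per-channel deltas between consecutive frames are computed. The frame data here is synthetic; real input would come from decoded video.

```python
def mean_rgb(frame):
    """Average (R, G, B) over all pixels of a frame,
    where a frame is a list of (r, g, b) tuples."""
    n = len(frame)
    return tuple(sum(px[c] for px in frame) / n for c in range(3))

def rgb_deltas(frames):
    """Per-channel change in mean level between consecutive frames."""
    means = [mean_rgb(f) for f in frames]
    return [tuple(b[c] - a[c] for c in range(3))
            for a, b in zip(means, means[1:])]

# Three tiny two-pixel "frames": a cut shows up as a large delta.
frames = [
    [(10, 20, 30), (10, 20, 30)],
    [(12, 20, 28), (12, 20, 28)],    # small drift within a shot
    [(200, 50, 50), (200, 50, 50)],  # abrupt change, e.g. a cut
]
deltas = rgb_deltas(frames)
```

Large deltas mark candidate discontinuities that can then be checked against the theorist's model of where structural breaks should fall.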
We are currently analyzing discontinuities in the entire data stream of a film, asking just what aspects of the data stream account for viewer reactions. We are examining the distribution of color, edges, luminance, and other components. By modeling changes in the various stimuli over time within a vector space model and comparing those changes with the responses of (at first) an expert viewer, then of a variety of viewers, we should be able to make strides in matching the most effective form of representation to the individual user, and at the same time provide a set of analytic tools that account for the multiple time-varying signals that make up a movie, whether a cell phone video or a Hollywood blockbuster.
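The vector-space idea can be sketched as follows, under stated assumptions: each frame is represented by a feature vector (the components here, such as mean color channels, edge density, and luminance, are hypothetical stand-ins for the components named above), and a discontinuity is flagged wherever the cosine similarity between consecutive vectors drops below a threshold. The feature values and threshold are synthetic.

```python
import math

def cosine(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def discontinuities(vectors, threshold=0.9):
    """Indices whose similarity to the previous frame falls below threshold."""
    return [i for i in range(1, len(vectors))
            if cosine(vectors[i - 1], vectors[i]) < threshold]

# Hypothetical per-frame features:
# [mean_red, mean_green, mean_blue, edge_density, luminance]
stream = [
    [10, 20, 30, 0.2, 21],
    [11, 20, 29, 0.2, 21],   # continuous motion within a shot
    [90,  5,  5, 0.8, 40],   # abrupt change across several threads
    [88,  6,  5, 0.8, 39],
]
cuts = discontinuities(stream)
```

Because every thread contributes a component to the same vector, a change in any one signal, or in several at once, registers in a single similarity measure, which is one way to handle threads that do not change in sync.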
Significantly, we now step outside the frame as the unit of analysis and look to the possibilities of analysis at the sub-pixel level. That is, analysis of one component of a pixel location, such as luminance or merely the green component (no red or blue), provides a very fine-grained level of examination. At the same time, the vector space model provides a way of examining the stimulus effect of multiple threads that do not necessarily change in sync.
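The single-component idea is easy to illustrate: track just the green value at one pixel location across frames, ignoring red and blue entirely, and watch its change over time. The frames below are synthetic 2x2 images of (R, G, B) tuples; this is a sketch of the idea, not the project's code.

```python
def green_series(frames, row, col):
    """Green component of one pixel location across a frame sequence."""
    return [frame[row][col][1] for frame in frames]

# Synthetic 2x2 frames of (R, G, B) pixels; only pixel (0, 0) varies.
frames = [
    [[(10, 20, 30), (0, 0, 0)], [(0, 0, 0), (0, 0, 0)]],
    [[(10, 25, 30), (0, 0, 0)], [(0, 0, 0), (0, 0, 0)]],
    [[(10, 90, 30), (0, 0, 0)], [(0, 0, 0), (0, 0, 0)]],
]
g = green_series(frames, 0, 0)               # one component over time
changes = [b - a for a, b in zip(g, g[1:])]  # frame-to-frame change
```

Each such series is one "thread" of the stimulus stream; the vector space model then lets many of these threads be examined together.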
As we consider these possibilities, we begin to see a general model of a document as a continuous stream of data that either (as a whole or in part) functions as a stimulus or does not.
Our poster will present graphical representations of changes in the data stream for the "Bodega Bay" sequence of Hitchcock's THE BIRDS and the reactions of Raymond Bellour, whose analyses and modeling of Hitchcock's works and of classic Hollywood film in general are held in high regard. We begin with Bellour and the Bodega Bay sequence because we have already published research on this data and thus have a significant foundation upon which to build. We will then apply the same techniques to a set of other works.